Differentiated services

Differentiated Services or DiffServ is a computer networking architecture that specifies a simple, scalable and coarse-grained mechanism for classifying and managing network traffic and providing Quality of Service (QoS) on modern IP networks. DiffServ can, for example, be used to provide low-latency service to critical network traffic such as voice or streaming media, while providing simple best-effort service to non-critical traffic such as web browsing or file transfers.

DiffServ uses the 6-bit Differentiated Services Code Point (DSCP) field in the IP header for packet classification purposes. The DS field, which carries the DSCP, replaces the outdated Type of Service (TOS) field.

Background

Modern data networks carry many different types of services, including voice, video, streaming music, web pages and email. Many of the QoS mechanisms proposed to allow these services to co-exist were complex and failed to scale to meet the demands of the public Internet. In December 1998, the IETF published RFC 2474 (Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers), which redefined the TOS field as the DS field carrying the DSCP. Within the DSCP, a range of eight values (the class selectors) is reserved for backward compatibility with IP precedence. Today, DiffServ has largely supplanted TOS and other Layer 3 QoS mechanisms, such as Integrated Services (IntServ), as the primary mechanism routers use to provide different levels of service.

Traffic management mechanisms

DiffServ is a coarse-grained, class-based mechanism for traffic management. In contrast, IntServ is a fine-grained, flow-based mechanism.

DiffServ operates on the principle of traffic classification, where each data packet is placed into a limited number of traffic classes, rather than differentiating network traffic based on the requirements of an individual flow. Each router on the network is configured to differentiate traffic based on its class. Each traffic class can be managed differently, ensuring preferential treatment for higher-priority traffic on the network.

While DiffServ does recommend a standardized set of traffic classes,[1] the DiffServ architecture does not incorporate predetermined judgements of what types of traffic should be given priority treatment. DiffServ simply provides a framework for classification and differentiated treatment. The standard traffic classes (discussed below) serve to make interoperability between different networks and different vendors' equipment simpler.

DiffServ relies on a mechanism to classify and mark packets as belonging to a specific class. DiffServ-aware routers implement Per-Hop Behaviors (PHBs), which define the packet-forwarding properties associated with a class of traffic. Different PHBs may be defined to offer, for example, low loss or low latency.

DiffServ domain

A group of routers that implement common, administratively defined DiffServ policies is referred to as a DiffServ domain.

Classification and marking

Network traffic entering a DiffServ domain is subjected to classification and conditioning. Traffic may be classified by many different parameters, such as source address, destination address or traffic type, and assigned to a specific traffic class. Traffic classifiers may honor any DiffServ markings in received packets, or may elect to ignore or override those markings. Because network operators want tight control over the volume and type of traffic in a given class, it is very rare that the network honors markings at the ingress to the DiffServ domain. Traffic in each class may be further conditioned by subjecting it to rate limiters, traffic policers or shapers.
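
As an illustration, the following sketch (in Python, with purely hypothetical rule and class names) shows how an ingress classifier might assign packets to traffic classes from header fields while ignoring any DSCP the packets already carry:

    # Hypothetical ingress classifier: assigns a traffic class from header
    # fields rather than trusting the DSCP carried by the packet.
    from dataclasses import dataclass
    from ipaddress import ip_address, ip_network

    @dataclass
    class Rule:
        traffic_class: str   # class to assign, e.g. "voice" or "bulk"
        src: str             # source prefix to match
        proto: str           # transport protocol ("udp" or "tcp")
        dport: int | None    # destination port, or None for any port

    RULES = [
        Rule("voice", "10.1.0.0/16", "udp", 5060),  # e.g. telephony from the PBX subnet
        Rule("bulk",  "10.2.0.0/16", "tcp", None),  # e.g. backups from the storage subnet
    ]

    def classify(src_ip: str, proto: str, dport: int) -> str:
        """Return the traffic class for a packet; unmatched traffic is best-effort."""
        for rule in RULES:
            if (ip_address(src_ip) in ip_network(rule.src)
                    and proto == rule.proto
                    and rule.dport in (None, dport)):
                return rule.traffic_class
        return "best-effort"  # falls into the default PHB

    print(classify("10.1.3.7", "udp", 5060))  # -> voice

In practice this step is performed by router or switch hardware at the edge of the domain, but the logic is the same: match on header fields, then assign a class.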

Per-hop behavior

The Per-Hop Behavior is determined by the differentiated services (DS) field in the IPv4 or IPv6 header. The DS field contains the 6-bit differentiated services code point (DSCP);[2] together with the 2-bit Explicit Congestion Notification field, it occupies the byte formerly used for the TOS field.[3][4][5]

In theory, a network could have up to 64 (i.e. 2⁶) different traffic classes using different markings in the DSCP. The DiffServ RFCs recommend, but do not require, certain encodings. This gives a network operator great flexibility in defining traffic classes. In practice, however, most networks use the following commonly defined Per-Hop Behaviors:
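
As a concrete illustration (a minimal Python sketch, not tied to any particular platform), the DS byte in a packet header can be split into its DSCP and ECN parts like this:

    # Split the DS/ECN byte: the high 6 bits are the DSCP, the low 2 bits are ECN.
    def split_ds_byte(byte: int) -> tuple[int, int]:
        dscp = byte >> 2      # 0..63, i.e. up to 64 possible classes
        ecn = byte & 0b11     # Explicit Congestion Notification bits
        return dscp, ecn

    def make_ds_byte(dscp: int, ecn: int = 0) -> int:
        return (dscp << 2) | ecn

    print(split_ds_byte(0xB8))  # (46, 0): DSCP 46 is Expedited Forwarding
    print(make_ds_byte(46))     # 184 (0xB8)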

Default PHB

A default PHB is the only required behavior. Essentially, any traffic that does not meet the requirements of any of the other defined classes is placed in the default PHB. Typically, the default PHB has best-effort forwarding characteristics. The recommended DSCP for the default PHB is '000000' (in binary).

Expedited Forwarding (EF) PHB

The IETF defines Expedited Forwarding behavior in RFC 3246. The EF PHB has the characteristics of low delay, low loss and low jitter. These characteristics are suitable for voice, video and other realtime services. EF traffic is often given strict priority queuing above all other traffic classes. Because an overload of EF traffic will cause queuing delays and affect the jitter and delay tolerances within the class, EF traffic is often strictly controlled through admission control, policing and other mechanisms. Typical networks limit EF traffic to no more than 30% of the capacity of a link, and often much less. The recommended DSCP for expedited forwarding is 101110 in binary (46 in decimal, 2E in hexadecimal).
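
On hosts that expose the DS byte through the socket API, an application can request the EF marking by setting the IP_TOS socket option to the DSCP shifted left by two bits. A minimal sketch, assuming a Linux host and a UDP sender (the address and port are placeholders); whether routers honour the marking depends entirely on the DiffServ policy along the path:

    import socket

    EF_DSCP = 46               # Expedited Forwarding, 101110 in binary
    ds_byte = EF_DSCP << 2     # DSCP occupies the upper 6 bits of the byte (0xB8)

    # Ask the kernel to mark outgoing packets with the EF codepoint.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ds_byte)
    sock.sendto(b"rtp-like payload", ("192.0.2.10", 4000))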

Assured Forwarding (AF) PHB group

The IETF defines the Assured Forwarding behavior in RFC 2597 and RFC 3260. Assured forwarding allows the operator to provide assurance of delivery as long as the traffic does not exceed some subscribed rate. Traffic that exceeds the subscription rate faces a higher probability of being dropped if congestion occurs.

The AF behavior group defines four separate AF classes. Within each class, packets are given a drop precedence (high, medium or low). The combination of classes and drop precedences yields twelve separate DSCP encodings, from AF11 through AF43 (see the table below).

Assured Forwarding (AF) Behavior Group
              Class 1         Class 2         Class 3         Class 4
Low drop      AF11 (DSCP 10)  AF21 (DSCP 18)  AF31 (DSCP 26)  AF41 (DSCP 34)
Medium drop   AF12 (DSCP 12)  AF22 (DSCP 20)  AF32 (DSCP 28)  AF42 (DSCP 36)
High drop     AF13 (DSCP 14)  AF23 (DSCP 22)  AF33 (DSCP 30)  AF43 (DSCP 38)
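
The codepoints in the table follow a simple pattern: the AF class number occupies the top three DSCP bits and the drop precedence the next two, so the DSCP value is 8 × class + 2 × drop precedence. A quick check in Python:

    # AFxy: x = class (1-4), y = drop precedence (1 = low, 2 = medium, 3 = high)
    def af_dscp(af_class: int, drop_precedence: int) -> int:
        return (af_class << 3) | (drop_precedence << 1)

    for c in range(1, 5):
        print("  ".join(f"AF{c}{d} = {af_dscp(c, d)}" for d in range(1, 4)))
    # AF11 = 10  AF12 = 12  AF13 = 14
    # AF21 = 18  AF22 = 20  AF23 = 22  ... matching the table above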

Some measure of priority and proportional fairness is defined between traffic in different classes. Should congestion occur between classes, the traffic in the higher class is given priority. Rather than using strict priority queueing, more balanced queue servicing algorithms such as fair queueing or weighted fair queuing are likely to be used. If congestion occurs within a class, the packets with the higher drop precedence are discarded first. To prevent issues associated with tail drop, the random early detection (RED), RED for In and Out (RIO) or weighted random early detection (WRED) algorithms are often used to drop packets.
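
A weighted RED queue makes the drop probability depend both on the average queue depth and on the packet's drop precedence. The sketch below illustrates the idea; the thresholds and probabilities are invented for the example, not taken from any standard or vendor default:

    import random

    # Illustrative WRED profiles: (min threshold, max threshold, max drop probability).
    # Higher drop precedence starts dropping earlier and more aggressively.
    WRED_PROFILES = {
        "low":    (40, 60, 0.05),
        "medium": (30, 60, 0.10),
        "high":   (20, 60, 0.20),
    }

    def wred_drop(avg_queue_len: float, drop_precedence: str) -> bool:
        min_th, max_th, max_p = WRED_PROFILES[drop_precedence]
        if avg_queue_len < min_th:
            return False          # below the minimum threshold: never drop
        if avg_queue_len >= max_th:
            return True           # above the maximum threshold: always drop
        # In between, the drop probability grows linearly up to max_p.
        p = max_p * (avg_queue_len - min_th) / (max_th - min_th)
        return random.random() < p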

Usually, traffic policing is required to encode drop precedence. Typically, all traffic assigned to a class is initially given a low drop precedence. As the traffic rate exceeds subscription thresholds, the policer will increase the drop precedence of packets that exceed the threshold.
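
The sketch below shows the essence of such a policer as a single token bucket (a simplification of markers such as the single-rate three-colour marker of RFC 2697, which uses two buckets and three colours); the rate and burst parameters are illustrative:

    import time

    class DropPrecedenceMarker:
        """Token-bucket policer sketch: traffic within the subscribed rate keeps
        low drop precedence; traffic exceeding it is re-marked, not dropped."""

        def __init__(self, rate_bps: float, burst_bytes: float):
            self.rate = rate_bps / 8.0    # token fill rate in bytes per second
            self.burst = burst_bytes      # bucket depth in bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def mark(self, packet_len: int) -> str:
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_len:
                self.tokens -= packet_len
                return "low"              # conforming, e.g. left as AF11
            return "high"                 # exceeding, e.g. re-marked to AF13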

Class selector PHB

Prior to DiffServ, IP networks could use the Precedence field in the Type of Service (TOS) byte of the IP header to mark priority traffic. The TOS byte and IP precedence were not widely used. The IETF agreed to reuse the TOS byte as the DS field for DiffServ networks. In order to maintain backward compatibility with network devices that still use the Precedence field, DiffServ defines the Class Selector PHB.

The Class Selector codepoints are of the form 'xxx000'. The first three bits are the IP precedence bits. Each IP precedence value can be mapped into a DiffServ class. If a packet is received from a non-DiffServ aware router that used IP precedence markings, the DiffServ router can still understand the encoding as a Class Selector codepoint.
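
Because the Class Selector codepoints simply carry the old precedence value in the top three bits, the mapping is a three-bit shift. A small Python sketch:

    # Class Selector codepoints: IP precedence (0-7) placed in the top 3 DSCP bits.
    def precedence_to_cs_dscp(precedence: int) -> int:
        return precedence << 3            # 'xxx000' in binary

    for p in range(8):
        print(f"IP precedence {p} -> CS{p} (DSCP {precedence_to_cs_dscp(p)})")
    # e.g. IP precedence 5 -> CS5 (DSCP 40)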

Advantages of DiffServ

Under DiffServ, all the policing and classifying is done at the boundaries between DiffServ domains. This means that in the core of the Internet, routers are unhindered by the complexities of collecting payment or enforcing agreements. That is, in contrast to IntServ, DiffServ requires no advance setup, no reservation, and no time-consuming end-to-end negotiation for each flow.

Disadvantages of DiffServ

End-to-end and peering problems

The details of how individual routers deal with the DS field are configuration specific; therefore it is difficult to predict end-to-end behaviour. This is complicated further if a packet crosses two or more DiffServ domains before reaching its destination.

From a commercial viewpoint, this is a major flaw, as it means that it is impossible to sell different classes of end-to-end connectivity to end users: one provider's Gold packet may be another's Bronze. Internet operators could fix this by enforcing standardised policies across networks, but they are not keen on adding new levels of complexity to their already complex peering agreements. One of the reasons for this is set out below.

DiffServ or any other IP based QoS marking does not ensure quality of the service or a specified service level agreement (SLA). By marking the packets, the sender indicates that it wants the packets to be treated as a specific service, but it can only hope that this happens. It is up to all the service providers and their routers in the path to ensure that their policies will take care of the packets in an appropriate fashion.

DiffServ vs. more capacity

Many network engineers and IT professionals believe that the problem addressed by DiffServ should not exist, and that the capacity of Internet links should instead be made large enough to prevent packet loss altogether.[6]

The logic is as follows. DiffServ is simply a mechanism for delivering some traffic at the expense of other traffic when there is not enough network capacity. Therefore, when DiffServ is working by dropping packets selectively, the link in question must already be very close to saturation. Any further increase in traffic will result in Bronze services being taken out altogether, and this will happen on a regular basis if the average traffic on a link is near the level at which DiffServ becomes needed.

For a few years after the tech wreck of 2001, there was a glut of fibre capacity in most parts of the telecoms market, and it was far easier and cheaper to add more capacity than to employ elaborate DiffServ policies as a way of increasing customer satisfaction. This is what is generally done in the core of the Internet, which is generally fast and dumb, with "fat pipes" connecting its routers.

Other IT professionals and network engineers consider this logic flawed in several respects:

  1. The problem of Bronze traffic being starved can be avoided if the network is provisioned to guarantee a minimum Bronze bandwidth by limiting the maximum amount of higher-priority traffic admitted (see the sketch after this list).
  2. Simple over-provisioning is an inefficient solution, since Internet traffic is highly bursty. If the network is dimensioned to carry all traffic at such times, then it will cost an order of magnitude more than a network dimensioned to carry typical traffic, with traffic management used to prevent collapse during such peaks.
  3. It is not even possible to dimension for "peak load". In particular, when sending a large file, the TCP protocol continues to request more bandwidth as the loss rate decreases, and so it is simply not possible to dimension links to avoid end-to-end loss altogether: increasing the capacity of one link eventually causes loss to occur on a different link.
  4. With wireless links such as EV-DO, where the air-interface bandwidth is several orders of magnitude less than the backhaul, QoS is being used to efficiently deliver VoIP packets where it would not otherwise be achievable.
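
Regarding the first point, the sketch below shows the arithmetic of capping admitted high-priority traffic so that lower classes retain a guaranteed minimum share of the link; all the numbers are illustrative:

    # Illustrative admission-control arithmetic for point 1 above.
    LINK_MBPS = 1000           # link capacity
    EF_CAP_FRACTION = 0.30     # cap on admitted EF (priority) traffic
    BRONZE_MIN_MBPS = 100      # minimum bandwidth kept free for best-effort

    def admit_ef(requested_mbps: float, current_ef_mbps: float) -> bool:
        """Admit a new EF flow only if the EF cap and the Bronze floor both hold."""
        new_ef = current_ef_mbps + requested_mbps
        return (new_ef <= EF_CAP_FRACTION * LINK_MBPS
                and LINK_MBPS - new_ef >= BRONZE_MIN_MBPS)

    print(admit_ef(50, 200))   # True: 250 Mbit/s of EF still leaves 750 for other classes
    print(admit_ef(150, 200))  # False: would exceed the 300 Mbit/s EF cap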

The need for traffic shaping and QoS is nonetheless real and is seen on networks every day. The ability to mark packets and expedite the forwarding of time-sensitive data lets a network ride through spikes in bandwidth utilization that are transient in nature and extremely difficult to characterise without monitoring bandwidth over an extended period.

Effects of dropped packets

Dropping packets wastes the resources that have already been expended in carrying these packets so far through the network. Dropping packets amounts to betting that congestion will have resolved by the time the packets are re-sent, or that (if the dropped packets are TCP datagrams) TCP will throttle back transmission rates at the sources to reduce congestion in the network. The TCP congestion avoidance algorithms are subject to a phenomenon called TCP global synchronization unless special approaches (such as random early detection) are taken when dropping TCP packets. In global synchronization, all TCP streams tend to build up their transmission rates together, reach the peak throughput of the network, and all crash together to a lower rate as packets are dropped, only to repeat the process.

DiffServ as rationing

DiffServ is, for most ISPs, mainly a way of rationing customer network utilisation to allow greater overbooking of their capacity.

Bandwidth broker

RFC 2638 from the IETF defines the entity of the Bandwidth Broker in the framework of DiffServ. A Bandwidth Broker is an agent that has some knowledge of an organization's priorities and policies and allocates bandwidth with respect to those policies. In order to achieve an end-to-end allocation of resources across separate domains, the Bandwidth Broker managing a domain has to communicate with its adjacent peers, which allows end-to-end services to be constructed out of purely bilateral agreements.
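
Purely as an illustration (the class names, capacities and interface below are assumptions, not anything defined by RFC 2638), a Bandwidth Broker can be thought of as an agent that grants or refuses per-class allocations inside its own domain and consults the adjacent domain's broker when a request must extend beyond it:

    class BandwidthBroker:
        """Toy Bandwidth Broker: tracks per-class capacity in one domain and
        asks the adjacent domain's broker when a request crosses domains."""

        def __init__(self, capacity_by_class: dict, peer: "BandwidthBroker" = None):
            self.free = dict(capacity_by_class)  # remaining Mbit/s per class
            self.peer = peer                     # broker of the adjacent domain

        def request(self, traffic_class: str, mbps: float, crosses_domain: bool = False) -> bool:
            if self.free.get(traffic_class, 0.0) < mbps:
                return False                     # local policy refuses
            if crosses_domain and self.peer is not None:
                if not self.peer.request(traffic_class, mbps):
                    return False                 # the downstream domain refuses
            self.free[traffic_class] -= mbps
            return True

    # An end-to-end allocation built from bilateral agreements between adjacent brokers.
    downstream = BandwidthBroker({"EF": 100.0})
    upstream = BandwidthBroker({"EF": 300.0}, peer=downstream)
    print(upstream.request("EF", 80.0, crosses_domain=True))  # True
    print(upstream.request("EF", 80.0, crosses_domain=True))  # False: downstream is out of EF capacity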
